Implement CFD grid interpolation logic for validation #268

ryancinsight wants to merge 3 commits into main
Conversation
…nal reference Co-authored-by: ryancinsight <55164720+ryancinsight@users.noreply.github.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the CFD validation framework by introducing robust grid interpolation capabilities, allowing for accurate comparisons between CFD Python results and external solutions even when their computational grids differ. It also improves the script's resilience by conditionally skipping 2D comparisons if the solver doesn't provide full field data, and updates solver parameter handling for better compatibility. These changes collectively make the validation process more flexible and less prone to failure due to grid discrepancies or incomplete solver outputs.

Highlights
Changelog
Code Review
This pull request introduces interpolation logic to handle grid mismatches in the compare_cavity_external.py validation script, which is a necessary feature for comparing simulation results from different sources. The changes also improve robustness by gracefully handling cases where full 2D fields are not available from the Rust solver. The addition of __pycache__/ and venv/ to .gitignore is a good housekeeping change.
My review includes suggestions to further improve the new code. I've recommended refactoring the interpolation logic to reduce duplication, making it more maintainable. I've also suggested a more robust check for the presence of all required data fields to prevent potential AttributeError exceptions.
```python
if hasattr(result, "u_field"):
    return {
        "u": np.array(result.u_field),
        "v": np.array(result.v_field),
        "p": np.array(result.p_field),
```
The check for u_field improves robustness. However, it implicitly assumes that if u_field exists, v_field and p_field also exist. This could lead to an AttributeError if one of them is missing. It would be safer to explicitly check for all required fields before attempting to access them.
Suggested change:

```diff
-if hasattr(result, "u_field"):
+if all(hasattr(result, f) for f in ("u_field", "v_field", "p_field")):
     return {
         "u": np.array(result.u_field),
         "v": np.array(result.v_field),
         "p": np.array(result.p_field),
```
```python
# Interpolate fields (transpose as RectBivariateSpline expects [x, y] ordering)
spline_u = RectBivariateSpline(x_orig, y_orig, cfd_python_result["u"].T)
spline_v = RectBivariateSpline(x_orig, y_orig, cfd_python_result["v"].T)
spline_p = RectBivariateSpline(x_orig, y_orig, cfd_python_result["p"].T)

# Evaluate on new grid (and transpose back to [y, x] ordering)
cfd_python_result["u"] = spline_u(x_new, y_new).T
cfd_python_result["v"] = spline_v(x_new, y_new).T
cfd_python_result["p"] = spline_p(x_new, y_new).T
```
To improve maintainability and reduce code duplication, the interpolation logic for the u, v, and p fields can be refactored into a loop. This makes the code more concise and easier to modify if more fields need interpolation in the future.
Suggested change:

```diff
 # Interpolate fields (transpose as RectBivariateSpline expects [x, y] ordering)
-spline_u = RectBivariateSpline(x_orig, y_orig, cfd_python_result["u"].T)
-spline_v = RectBivariateSpline(x_orig, y_orig, cfd_python_result["v"].T)
-spline_p = RectBivariateSpline(x_orig, y_orig, cfd_python_result["p"].T)
-# Evaluate on new grid (and transpose back to [y, x] ordering)
-cfd_python_result["u"] = spline_u(x_new, y_new).T
-cfd_python_result["v"] = spline_v(x_new, y_new).T
-cfd_python_result["p"] = spline_p(x_new, y_new).T
+for field_key in ("u", "v", "p"):
+    spline = RectBivariateSpline(x_orig, y_orig, cfd_python_result[field_key].T)
+    # Evaluate on new grid and transpose back to [y, x] ordering
+    cfd_python_result[field_key] = spline(x_new, y_new).T
```
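As a sanity check on the refactored loop, here is a self-contained sketch on synthetic grids. The grid sizes and the linear test fields are illustrative choices, not values from the PR; linear fields are handy because the default cubic `RectBivariateSpline` reproduces them exactly, so the round trip can be verified:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Original 5x5 grid and target 9x9 grid, both on the unit square.
x_orig = np.linspace(0.0, 1.0, 5)
y_orig = np.linspace(0.0, 1.0, 5)
x_new = np.linspace(0.0, 1.0, 9)
y_new = np.linspace(0.0, 1.0, 9)

# Fields stored in [y, x] ordering, matching the validation script's convention.
Y, X = np.meshgrid(y_orig, x_orig, indexing="ij")
cfd_python_result = {
    "u": X + 2.0 * Y,    # linear field: cubic splines reproduce it exactly
    "v": 3.0 * X - Y,
    "p": np.ones_like(X),
}

# The suggested loop: one spline per field, transposed to [x, y] and back.
for field_key in ("u", "v", "p"):
    spline = RectBivariateSpline(x_orig, y_orig, cfd_python_result[field_key].T)
    cfd_python_result[field_key] = spline(x_new, y_new).T

Yn, Xn = np.meshgrid(y_new, x_new, indexing="ij")
print(cfd_python_result["u"].shape)                        # (9, 9)
print(np.allclose(cfd_python_result["u"], Xn + 2.0 * Yn))  # True
```

The transpose pair matters: `RectBivariateSpline(x, y, z)` expects `z` with shape `(len(x), len(y))`, while the fields live in `(ny, nx)` order, hence `.T` going in and `.T` coming back out.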
📝 Walkthrough

Updated repository ignores for Python virtualenvs and bytecode. Enhanced validation logic in

Changes
🧹 Nitpick comments (1)
validation/compare_cavity_external.py (1)
Lines 109-134: Interpolation logic looks correct; note that the input dict is mutated.

The transpose handling is correct: fields stored as `(ny, nx)` are transposed to `(nx, ny)` for `RectBivariateSpline`, then transposed back after evaluation. The centerline recomputation also follows the correct row/column conventions.

One minor observation: this directly mutates `cfd_python_result` rather than creating a copy. This is fine for the current single-use validation context, but if this function were reused, the caller's original data would be modified.

Optional: work on a copy to avoid mutation

```diff
 if cfd_python_result["u"].shape != ext_sol["u"].shape:
     print(f"INFO: Interpolating cfd_python results from {cfd_python_result['u'].shape} to external {ext_sol['u'].shape} grid")
+    # Work on a copy to preserve original data
+    cfd_python_result = cfd_python_result.copy()
+
     # Original grid (assuming regular grid)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@validation/compare_cavity_external.py` around lines 109 - 134, The current interpolation mutates the input dictionary cfd_python_result (fields u,v,p, x,y and centerlines) in-place; to avoid side effects, create a shallow copy of cfd_python_result at the start of the interpolation block (e.g., copy = cfd_python_result.copy() or use dict(copy) or deep copy if nested) and perform all RectBivariateSpline setup and assignments (spline_u/spline_v/spline_p, evaluations, and updates to u_centerline/v_centerline/x/y) on that copy, then return or use the copy instead of overwriting the original cfd_python_result.
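The copy-before-modify pattern the prompt describes can be demonstrated without the full script. In this simplified sketch, the `resample_result` helper and the 1D `np.interp`-based fields are stand-ins of my own (the real code uses 2D `RectBivariateSpline` fields); the point is only the shallow-copy semantics:

```python
import numpy as np

def resample_result(result, x_new):
    """Return a resampled copy; the caller's dict is left untouched.

    Simplified stand-in: 1D linear interpolation instead of
    RectBivariateSpline, to keep the sketch self-contained.
    """
    out = dict(result)  # shallow copy: reassigning keys below won't touch the input
    for key in ("u", "v"):
        out[key] = np.interp(x_new, result["x"], result[key])
    out["x"] = x_new
    return out

original = {
    "x": np.linspace(0.0, 1.0, 5),
    "u": np.linspace(0.0, 1.0, 5) ** 2,
    "v": np.ones(5),
}
resampled = resample_result(original, np.linspace(0.0, 1.0, 9))

print(resampled["u"].shape)  # (9,)
print(original["u"].shape)   # (5,) -- the input is unchanged
```

A shallow `dict.copy()` suffices here because every field is *reassigned* on the copy rather than modified in place; if the code ever mutated the arrays themselves, a deep copy would be needed instead.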
Review by RecurseML
🔍 Review performed on 6be1767..5b674c7
✨ No bugs found, your code is sparkling clean
✅ Files analyzed, no issues (1)
• .gitignore
Co-authored-by: ryancinsight <55164720+ryancinsight@users.noreply.github.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/performance-benchmarking.yml (1)
Lines 83-91: ⚠️ Potential issue | 🟠 Major

`regression`/`production` steps still call undeclared bench targets.

`cargo bench --bench regression_detection` (Line 86) and `cargo bench --bench production_validation` (Line 91) do not match declared bench targets in `Cargo.toml` (context shows only `performance_benchmarks`). Because Line 84 triggers regression on schedule events, weekly runs can fail consistently.

```bash
#!/bin/bash
set -euo pipefail
echo "Workflow bench invocations:"
rg -n 'cargo bench --bench' .github/workflows/performance-benchmarking.yml
echo
echo "Declared [[bench]] names in Cargo.toml:"
python - <<'PY'
import pathlib, re
text = pathlib.Path("Cargo.toml").read_text(encoding="utf-8")
blocks = re.findall(r'\[\[bench\]\](.*?)(?=\n\[\[|\Z)', text, flags=re.S)
names = []
for b in blocks:
    m = re.search(r'^\s*name\s*=\s*"([^"]+)"', b, flags=re.M)
    if m:
        names.append(m.group(1))
print("\n".join(names) if names else "(none)")
PY
```

Expected result: the workflow references `regression_detection`/`production_validation`, while `Cargo.toml` declares only `performance_benchmarks`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/performance-benchmarking.yml around lines 83 - 91, The workflow steps call non-existent bench targets ("cargo bench --bench regression_detection" and "--bench production_validation"); update the Run regression detection benchmarks and Run production validation benchmarks steps to invoke the actual declared benchmark target (performance_benchmarks) or add matching [[bench]] entries in Cargo.toml; specifically change the cargo invocations referenced by the step names (Run regression detection benchmarks / Run production validation benchmarks) to use --bench performance_benchmarks (or ensure Cargo.toml contains names regression_detection and production_validation) so scheduled/conditional runs no longer fail.
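If the fix is to declare the missing targets rather than rename the workflow steps, the `Cargo.toml` entries would look roughly like the sketch below. The file names implied under `benches/` and the `harness = false` setting (typical when using a custom harness such as criterion) are assumptions on my part, not taken from this repository:

```toml
[[bench]]
name = "regression_detection"
harness = false

[[bench]]
name = "production_validation"
harness = false
```

Cargo resolves each `name` to `benches/<name>.rs` by default, so `cargo bench --bench regression_detection` would then find its target.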
Co-authored-by: ryancinsight <55164720+ryancinsight@users.noreply.github.com>
This pull request implements the `TODO: Interpolate if needed` in `validation/compare_cavity_external.py`.

Changes:

- Uses `scipy.interpolate.RectBivariateSpline` to handle grid size mismatches between the `cfd_python` result and the external reference solution.
- Skips 2D comparisons gracefully when the full fields (`u_field`, `v_field`, `p_field`) are not exported from the Rust solver via `cfd_python`.
- Adds `__pycache__/` to `.gitignore`.

PR created automatically by Jules for task 236727742202006950 started by @ryancinsight
High-level PR Summary
This PR implements CFD grid interpolation for validation by replacing a TODO placeholder with bivariate spline interpolation using `scipy.interpolate.RectBivariateSpline` to handle grid size mismatches between the `cfd_python` solver results and external reference solutions. The validation script now gracefully handles cases where the Rust solver doesn't export full 2D fields, and adds common Python artifacts to `.gitignore`.

⏱️ Estimated Review Time: 5-15 minutes
💡 Review Order Suggestion
1. `.gitignore`
2. `validation/compare_cavity_external.py`

Summary by CodeRabbit
Bug Fixes
Chores